Current Issue: January-March · Volume: 2020 · Issue Number: 1 · Articles: 5
Daily activities are characterized by increasing interaction with smart machines that exhibit a certain level of autonomy. However, the intelligence of such electronic devices is not always transparent to the end user. This study assesses the quality of the remote control of a mobile robot depending on whether the artefact exhibits a human-like behavior or not. The bioinspired behavior implemented in the robot is the well-described two-thirds power law. The performance of participants who teleoperate the semiautonomous vehicle implementing the biological law is compared to a manual and a nonbiological mode of control. The results show that the time required to complete the path and the number of collisions with obstacles are significantly lower in the biological condition than in the two other conditions. Moreover, the highest percentage of curvilinear or smooth trajectories is obtained when the steering is assisted by an integration of the power law into the robot's way of working. This advanced analysis of performance based on the naturalness of the movement kinematics provides a refined evaluation of the quality of the Human-Machine Interaction (HMI). This finding is consistent with the hypothesis of a relationship between the power law and jerk minimization. In addition, the outcome of this study supports the theory of a CNS origin of the power law. The discussion addresses the implications of the anthropocentric approach to enhancing the HMI.
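The two-thirds power law mentioned in this abstract relates tangential speed to path curvature: speed decreases as curvature increases, following a power of roughly one third. A minimal sketch of a speed controller built on that relationship (the function name, gain, and curvature floor are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def power_law_speed(curvature, gain=1.0, beta=1.0 / 3.0):
    """Two-thirds power law: tangential speed v = gain * curvature**(-beta).

    With beta = 1/3 this is the classic speed-curvature power law
    (equivalently, angular speed proportional to curvature**(2/3)).
    """
    curvature = np.asarray(curvature, dtype=float)
    # Clamp near-zero curvature so straight segments do not yield
    # an unbounded speed command.
    safe_k = np.maximum(curvature, 1e-6)
    return gain * safe_k ** (-beta)

# Speed commands drop as the path bends more sharply.
speeds = power_law_speed([0.1, 1.0, 10.0])
```

A teleoperation assist of the kind the study describes could feed the path's instantaneous curvature into such a function to modulate the robot's velocity.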
The speech carried in the human voice contains essential paralinguistic information used in many voice-recognition applications. Gender is considered one of the pivotal attributes to be detected from a given voice, a task that involves certain complications. In order to distinguish gender from a voice signal, a set of techniques has been employed to determine relevant features for building a model from a training set. This model is useful for determining the gender (i.e., male or female) from a voice signal. The contributions are threefold: (i) providing analysis of well-known voice signal features using a prominent dataset, (ii) studying various machine learning models from different theoretical families to classify voice gender, and (iii) using three prominent feature selection algorithms to find promising optimal features for improving the classification models. The experimental results show the importance of some subfeatures over others, which is vital for enhancing the efficiency of the classification models' performance. Experimentation reveals a best recall value of 99.97%; the best recall value is 99.7% for two models, deep learning (DL) and support vector machine (SVM), and with feature selection the best recall value is 100% for the SVM technique.
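The workflow described here, extract acoustic features, then train a classifier on labeled voices, can be sketched with a deliberately simple nearest-centroid classifier over toy features (the feature choice, synthetic values, and labels are illustrative assumptions; the paper uses richer feature sets and stronger models such as SVM and DL):

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vector (nearest-centroid classifier)."""
    classes = sorted(set(y))
    return {c: np.mean([x for x, lbl in zip(X, y) if lbl == c], axis=0)
            for c in classes}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in feature space."""
    x = np.asarray(x, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy feature vectors: [mean F0 in Hz, spectral centroid in kHz] (synthetic).
X = [[120, 1.2], [110, 1.1], [210, 1.9], [220, 2.0]]
y = ["male", "male", "female", "female"]
cent = fit_centroids(X, y)
label = predict(cent, [115, 1.15])
```

Feature selection, the paper's third contribution, would correspond here to choosing which columns of the feature vectors actually enter the distance computation.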
We propose an efficient hand gesture recognition (HGR) algorithm, which can cope with time-dependent data from an inertial measurement unit (IMU) sensor and support real-time learning for various human-machine interface (HMI) applications. Although the data extracted from IMU sensors are time-dependent, most existing HGR algorithms do not consider this characteristic, which results in the degradation of recognition performance. Because the dynamic time warping (DTW) technique considers the time-dependent characteristic of IMU sensor data, the recognition performance of DTW-based algorithms is better than that of others. However, the DTW technique requires a very complex learning algorithm, which makes it difficult to support real-time learning. To solve this issue, the proposed HGR algorithm is based on a restricted column energy (RCE) neural network, which has a very simple learning scheme in which neurons are activated when necessary. By replacing the metric calculation of the RCE neural network with DTW distance, the proposed algorithm exhibits superior recognition performance for time-dependent sensor data while supporting real-time learning. Our verification results on a field-programmable gate array (FPGA)-based test platform show that the proposed HGR algorithm can achieve a recognition accuracy of 98.6% and supports real-time learning and recognition at an operating frequency of 150 MHz.
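The core idea of the abstract, an RCE-style learner whose similarity metric is DTW distance, can be sketched as follows. The textbook DTW recurrence is standard; the `rce_learn` update (add a prototype only when no same-label prototype already covers the new sequence within a radius) is a simplified illustration of RCE learning under assumed names and a scalar-sequence setting, not the paper's FPGA implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rce_learn(prototypes, seq, label, radius=1.0):
    """RCE-style update with a DTW metric: store a new prototype only
    when no existing same-label prototype covers the sequence."""
    for p_seq, p_label in prototypes:
        if p_label == label and dtw_distance(p_seq, seq) <= radius:
            return prototypes  # already covered, no change
    return prototypes + [(seq, label)]
```

Because learning is just a covering test plus an optional append, each new labeled gesture costs one pass over the stored prototypes, which is what makes the scheme amenable to real-time operation.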
Computational drug repositioning, designed to identify new indications for existing drugs, significantly reduces the cost and time involved in drug development. Prediction of drug-disease associations is promising for drug repositioning. Recent years have witnessed an increasing number of machine learning-based methods for computational drug repositioning. In this paper, a novel feature learning method based on the Gaussian interaction profile kernel and an autoencoder (GIPAE) is proposed for drug-disease association. In order to further reduce the computation cost, both a batch normalization layer and a fully connected layer are introduced to reduce training complexity. The experimental results of 10-fold cross validation indicate that the proposed method achieves superior performance on Fdataset and Cdataset, with AUCs of 93.30% and 96.03%, respectively, which are higher than those of many previous computational models. To further assess the accuracy of GIPAE, we conducted case studies on two complex human diseases: of the top 20 drugs predicted, 14 obesity-related drugs and 11 drugs related to Alzheimer's disease were validated in the CTD database. The results of cross validation and the case studies indicate that GIPAE is a reliable model for predicting drug-disease associations.
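The Gaussian interaction profile (GIP) kernel named in this abstract is computed from the rows of a binary drug-disease association matrix: two drugs are similar when their interaction profiles are close. A minimal sketch under the standard GIP formulation (bandwidth normalized by the mean squared profile norm; the function name and toy matrix are illustrative):

```python
import numpy as np

def gip_kernel(A, gamma_prime=1.0):
    """GIP kernel over the rows (drug interaction profiles) of a
    binary drug-disease association matrix A.

    K[i, j] = exp(-gamma * ||A[i] - A[j]||^2), with gamma set to
    gamma_prime divided by the average squared profile norm.
    """
    A = np.asarray(A, dtype=float)
    sq_norms = np.sum(A ** 2, axis=1)
    gamma = gamma_prime / max(np.mean(sq_norms), 1e-12)
    # Pairwise squared Euclidean distances between profiles.
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * A @ A.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

# Toy association matrix: rows = drugs, columns = diseases.
A = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 0]])
K = gip_kernel(A)
```

In GIPAE, such kernel similarities would form the input features that the autoencoder then compresses; that downstream network is not sketched here.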
Human motion intention recognition is key to achieving perfect human-machine coordination and wearing comfort of wearable robots. Surface electromyography (sEMG), as a bioelectrical signal, is generated prior to the corresponding motion and reflects human motion intention directly. Thus, better human-machine interaction can be achieved by using sEMG-based motion intention recognition. In this paper, we review and discuss in detail the state of the art of sEMG-based motion intention recognition. According to the method adopted, motion intention recognition is divided into two groups: sEMG-driven musculoskeletal (MS) model-based motion intention recognition and machine learning (ML) model-based motion intention recognition. The specific models and recognition results of each study are analyzed and systematically compared. Finally, a discussion of the existing problems in current studies, major advances, and future challenges is presented.
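The ML-based branch of this taxonomy typically starts by reducing each sEMG window to a handful of time-domain features before classification. A sketch of four features that recur across the surveyed literature, mean absolute value (MAV), root mean square (RMS), waveform length (WL), and zero-crossing count (ZC); the function name and the windowing convention are illustrative assumptions:

```python
import numpy as np

def semg_features(window):
    """Common time-domain features for one sEMG window."""
    w = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(w))                 # mean absolute value
    rms = np.sqrt(np.mean(w ** 2))           # root mean square
    wl = np.sum(np.abs(np.diff(w)))          # waveform length
    zc = int(np.sum(np.signbit(w[:-1]) != np.signbit(w[1:])))  # zero crossings
    return np.array([mav, rms, wl, zc], dtype=float)
```

A classifier of any family covered by the review would then map these per-window feature vectors to intended motions.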